European Option Pricing Formula Under Stochastic Interest Rate
Abstract: This paper reviews the option pricing model and its applications. Building on earlier studies, we assume that the interest rate satisfies a Vasicek stochastic differential equation and use the martingale method of option pricing to study European option pricing under a stochastic interest rate model, obtaining its pricing formula. Finally, we compare the differences between the standard European option pricing formula and the European option pricing formula under a stochastic interest rate.
Key words: Option Pricing; Stochastic Interest Rates; Vasicek Model; Brownian Motion
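As a hedged illustration of the setup this abstract describes, the sketch below prices a European call by Monte Carlo with the short rate following a Vasicek SDE, dr_t = a(b - r_t) dt + sigma_r dW_t, under the risk-neutral measure. All parameter values, and the correlation between the rate and stock drivers, are hypothetical; the paper derives a closed-form formula rather than simulating.

import numpy as np

def price_call_vasicek(S0=100.0, K=100.0, T=1.0, r0=0.03,
                       a=0.5, b=0.04, sigma_r=0.01,
                       sigma_s=0.2, rho=-0.3,
                       n_steps=252, n_paths=100_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    r = np.full(n_paths, r0)
    integral_r = np.zeros(n_paths)       # accumulates int_0^T r_t dt for discounting
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rng.standard_normal(n_paths)
        dw_s = np.sqrt(dt) * z1
        dw_r = np.sqrt(dt) * (rho * z1 + np.sqrt(1 - rho**2) * z2)
        integral_r += r * dt
        S *= np.exp((r - 0.5 * sigma_s**2) * dt + sigma_s * dw_s)  # risk-neutral drift r_t
        r += a * (b - r) * dt + sigma_r * dw_r                     # Vasicek mean reversion
    payoff = np.maximum(S - K, 0.0)
    return np.mean(np.exp(-integral_r) * payoff)   # pathwise stochastic discounting

print(f"call price ~ {price_call_vasicek():.4f}")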
Algorithm Hardware Codesign for High Performance Neuromorphic Computing
Driven by the massive application of the Internet of Things (IoT), embedded systems, Cyber-Physical Systems (CPS), etc., there is an increasing demand to apply machine intelligence in these power-limited scenarios. Though deep learning has achieved impressive performance on various realistic and practical tasks such as anomaly detection, pattern recognition, and machine vision, the ever-increasing computational complexity and model size of Deep Neural Networks (DNNs) make it challenging to deploy them in the aforementioned scenarios, where computation, memory, and energy resources are all limited. Early studies show that the energy efficiency of biological systems can be orders of magnitude higher than that of digital systems. Hence, taking inspiration from biological systems, neuromorphic computing and Spiking Neural Networks (SNNs) have drawn attention as alternative solutions for energy-efficient machine intelligence.
Though believed to be promising, neuromorphic computing is hardly used in real-world applications. A major problem is that the performance of SNNs is limited compared with DNNs due to the lack of efficient training algorithms. In an SNN, a neuron's output is a spike, which is represented mathematically by the Dirac delta function. Because of the non-differentiable nature of spikes, gradient descent cannot be used directly to train SNNs, so algorithm-level innovation is needed. Next, as neuromorphic computing is an emerging paradigm, hardware- and architecture-level innovation is also required to support new algorithms and to explore its potential.
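To make the non-differentiability problem concrete, here is a minimal sketch of the surrogate-gradient idea commonly used to work around it: the forward pass keeps the Heaviside spike, while the backward pass substitutes a smooth approximation for the Dirac delta. The sigmoid surrogate and its sharpness parameter beta are illustrative choices, not the dissertation's specific algorithm.

import numpy as np

def spike_forward(v, threshold=1.0):
    """Heaviside spike: 1 if the membrane potential crosses threshold."""
    return (v >= threshold).astype(np.float64)

def spike_surrogate_grad(v, threshold=1.0, beta=5.0):
    """Sigmoid-derivative surrogate used in place of the Dirac delta."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - threshold)))
    return beta * s * (1.0 - s)

v = np.linspace(0.0, 2.0, 5)
print(spike_forward(v))          # non-differentiable forward output
print(spike_surrogate_grad(v))   # smooth gradient used in the backward pass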
In this work, we present a comprehensive algorithm-hardware codesign for neuromorphic computing. On the algorithm side, we address the training difficulty. We first derive a flexible SNN model that retains the critical neural dynamics, and then develop an algorithm to train SNNs to learn temporal patterns. Next, we apply the proposed algorithm to multivariate time series classification tasks to demonstrate its advantages. On the hardware side, we develop a systematic FPGA solution optimized for the proposed SNN model to enable high-performance inference. In addition, we explore emerging devices and propose a memristor-based neuromorphic design, including a neuron and synapse circuit that replicates important neural dynamics such as the filter effect and adaptive thresholding.
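A minimal discrete-time sketch of the two dynamics named at the end of this abstract, the synaptic filter effect and the adaptive threshold: the synapse low-pass filters incoming spikes, and the firing threshold jumps after each output spike and decays back to baseline. All constants and the specific update rules below are illustrative assumptions, not the dissertation's circuit.

import numpy as np

def lif_adaptive(input_spikes, tau_syn=5.0, tau_mem=10.0, tau_adapt=50.0,
                 v_th0=1.0, adapt_jump=0.3, w=1.5):
    i_syn, v, theta = 0.0, 0.0, v_th0
    out = []
    for s in input_spikes:
        i_syn += -i_syn / tau_syn + w * s       # synaptic filter: exponential kernel
        v += -v / tau_mem + i_syn               # leaky membrane integration
        theta += (v_th0 - theta) / tau_adapt    # threshold relaxes toward baseline
        spike = float(v >= theta)
        v *= 1.0 - spike                        # reset membrane on spike
        theta += spike * adapt_jump             # adaptive threshold: harder to fire again
        out.append(spike)
    return np.array(out)

rng = np.random.default_rng(1)
print(lif_adaptive((rng.random(50) < 0.3).astype(float)).sum())  # output spike count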
Exploiting Neuron and Synapse Filter Dynamics in Spatial Temporal Learning of Deep Spiking Neural Network
The recently discovered spatial-temporal information processing capability of bio-inspired spiking neural networks (SNNs) has enabled some interesting models and applications. However, designing large-scale, high-performance models remains a challenge due to the lack of robust training algorithms. A bio-plausible SNN model with spatial-temporal properties is a complex dynamic system: each synapse and neuron behaves as a filter capable of preserving temporal information. Because these neuron dynamics and filter effects are ignored in existing training algorithms, the SNN degrades into a memoryless system and loses the ability to process temporal signals. Furthermore, spike timing plays an important role in information representation, but conventional rate-based spike coding models treat spike trains only statistically and discard the information carried by their temporal structure. To address these issues and exploit the temporal dynamics of SNNs, we formulate the SNN as a network of infinite impulse response (IIR) filters with neuron nonlinearity. We propose a training algorithm capable of learning spatial-temporal patterns by searching for the optimal synapse filter kernels and weights. The proposed model and training algorithm are applied to construct associative memories and classifiers for synthetic and public datasets, including MNIST, NMNIST, and DVS128, and their accuracy outperforms state-of-the-art approaches.
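Read literally, the formulation in this abstract admits a compact sketch: each synapse is an IIR filter whose state is updated recurrently, and the neuron applies a threshold nonlinearity to the summed filter outputs. The first-order filter y[t] = alpha*y[t-1] + w*x[t], the fixed decay constants, and all values below are illustrative; the paper's algorithm additionally learns the filter kernels, which this sketch omits.

import numpy as np

def iir_snn_layer(spikes_in, weights, alphas, v_th=1.0, tau_mem=10.0):
    """spikes_in: (T, n_in) binary; weights, alphas: (n_out, n_in)."""
    T, n_in = spikes_in.shape
    n_out = weights.shape[0]
    y = np.zeros((n_out, n_in))                  # per-synapse IIR filter states
    v = np.zeros(n_out)
    out = np.zeros((T, n_out))
    for t in range(T):
        y = alphas * y + weights * spikes_in[t]  # synaptic IIR update
        v = v * (1 - 1 / tau_mem) + y.sum(axis=1)  # leaky membrane sums filter outputs
        out[t] = (v >= v_th)                     # neuron nonlinearity: spike threshold
        v = v * (1 - out[t])                     # reset neurons that spiked
    return out

rng = np.random.default_rng(0)
x = (rng.random((100, 4)) < 0.2).astype(float)
w = rng.normal(0.5, 0.1, size=(3, 4))
a = np.full((3, 4), 0.8)
print(iir_snn_layer(x, w, a).sum(axis=0))        # spike counts per output neuron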
Neuromorphic Online Learning for Spatiotemporal Patterns with a Forward-only Timeline
Spiking neural networks (SNNs) are bio-plausible computing models with high energy efficiency. The temporal dynamics of neurons and synapses enable them to detect temporal patterns and generate sequences. While Backpropagation Through Time (BPTT) is traditionally used to train SNNs, it is not suitable for the online learning required by embedded applications due to its high computation and memory cost as well as extended latency. Previous works have proposed online learning algorithms, but they often use highly simplified spiking neuron models without synaptic dynamics or reset feedback, resulting in subpar performance. In this work, we present Spatiotemporal Online Learning for Synaptic Adaptation (SOLSA), specifically designed for online learning of SNNs composed of Leaky Integrate-and-Fire (LIF) neurons with exponentially decaying synapses and soft reset. The algorithm learns not only the synaptic weights but also the temporal filters associated with the synapses. Compared to BPTT, SOLSA has a much lower memory requirement and achieves a more balanced temporal workload distribution. Moreover, SOLSA incorporates enhancement techniques such as scheduled weight updates, early-stop training, and adaptive synapse filters, which speed up convergence and improve learning performance. Compared to other non-BPTT-based SNN learning algorithms, SOLSA demonstrates an average learning accuracy improvement of 14.2%. Furthermore, compared to BPTT, SOLSA achieves a 5% higher average learning accuracy with a 72% reduction in memory cost.
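The following is a forward-only online-learning sketch in the spirit of what this abstract describes: an LIF neuron with an exponentially decaying synapse and soft reset, trained by carrying an eligibility trace forward in time instead of backpropagating through it. This generic eligibility-trace scheme stands in for SOLSA, whose exact update rules are not given here; all names and constants are assumptions.

import numpy as np

def online_train(x, target, w, lr=0.05, alpha=0.9, beta=0.95, v_th=1.0, epochs=200):
    """x: (T, n_in) input spikes; target: (T,) desired output spike train."""
    for _ in range(epochs):
        i_syn = np.zeros(x.shape[1])
        v = 0.0
        trace = np.zeros_like(w)                   # eligibility trace, one per weight
        for t in range(x.shape[0]):
            i_syn = alpha * i_syn + x[t]           # exponentially decaying synapse
            v = beta * v + w @ i_syn               # leaky membrane
            spike = float(v >= v_th)
            v -= spike * v_th                      # soft reset: subtract the threshold
            trace = beta * trace + i_syn           # forward-only credit assignment
            w -= lr * (spike - target[t]) * trace  # local update at every time step
    return w

rng = np.random.default_rng(2)
x = (rng.random((50, 8)) < 0.3).astype(float)
target = (rng.random(50) < 0.2).astype(float)
w = online_train(x, target, rng.normal(0.2, 0.05, 8))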
Securing the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples
Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance. However, unlike for traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remains relatively underdeveloped. In this work, we advance the field of adversarial machine learning through experimentation and analyses of three important SNN security attributes. First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique. Second, we analyze the transferability of adversarial examples generated by SNNs and other state-of-the-art architectures such as Vision Transformers and Big Transfer CNNs. We demonstrate that SNNs are not often deceived by adversarial examples generated by Vision Transformers and certain types of CNNs. Lastly, we develop a novel white-box attack that generates adversarial examples capable of fooling both SNN and non-SNN models simultaneously. Our experiments and analyses are broad and rigorous, covering two datasets (CIFAR-10 and CIFAR-100), five different white-box attacks, and twelve different classifier models.
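For concreteness, a minimal FGSM-style white-box attack is sketched below: the adversarial example is x + eps * sign(grad_x loss). Against an SNN, the input gradient would itself come from a surrogate-gradient backward pass, which is why the abstract's first finding matters; here a plain logistic model stands in for the classifier, so everything below is an illustrative assumption rather than the paper's attack.

import numpy as np

def fgsm(x, y, w, b, eps=0.03):
    """Single-logit binary classifier: loss = -log p(y | x)."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_x = (p - y) * w                 # d(cross-entropy)/dx for the logistic model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(3)
x = rng.random(784)                      # a flattened image-like input
w, b, y = rng.normal(0, 0.1, 784), 0.0, 1.0
x_adv = fgsm(x, y, w, b)
print(np.abs(x_adv - x).max())           # perturbation bounded by eps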
Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration
Biologically inspired Spiking Neural Networks (SNNs) have attracted significant attention for their ability to provide extremely energy-efficient machine intelligence through event-driven operation and sparse activity. As artificial intelligence (AI) becomes ever more democratized, there is an increasing need to execute SNN models on edge devices. Existing works adopt weight pruning to reduce SNN model size and accelerate inference. However, these methods mainly focus on obtaining a sparse model for efficient inference rather than on training efficiency. To overcome these drawbacks, in this paper we propose NDSNN, a Neurogenesis Dynamics-inspired Spiking Neural Network training acceleration framework. Our framework is computationally efficient and trains a model from scratch with dynamic sparsity, without sacrificing model fidelity. Specifically, we design a new drop-and-grow strategy with a decreasing number of non-zero weights, maintaining extremely high sparsity together with high accuracy. We evaluate NDSNN using VGG-16 and ResNet-19 on CIFAR-10, CIFAR-100, and Tiny-ImageNet. Experimental results show that NDSNN achieves up to 20.52% higher accuracy on Tiny-ImageNet using ResNet-19 (at 99% sparsity) compared to other state-of-the-art methods (e.g., Lottery Ticket Hypothesis (LTH), SET-SNN, RigL-SNN). In addition, the training cost of NDSNN is only 40.89% of the LTH training cost on ResNet-19 and 31.35% of the LTH training cost on VGG-16 on CIFAR-10.
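A hedged sketch of one drop-and-grow step like the strategy this abstract describes: prune the smallest-magnitude active weights, then regrow connections elsewhere under a (possibly decreased) non-zero budget. The random regrowth criterion, the drop fraction, and the budget schedule are illustrative assumptions, not NDSNN's actual rules.

import numpy as np

def drop_and_grow(w, mask, n_nonzero, drop_frac=0.3, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    active = np.flatnonzero(mask)
    n_drop = int(drop_frac * len(active))
    # Drop: zero out the smallest-magnitude active weights.
    drop_idx = active[np.argsort(np.abs(w.ravel()[active]))[:n_drop]]
    mask.ravel()[drop_idx] = False
    w.ravel()[drop_idx] = 0.0
    # Grow: re-enable inactive positions up to the non-zero budget,
    # which can shrink over training to increase sparsity.
    n_grow = max(n_nonzero - int(mask.sum()), 0)
    inactive = np.flatnonzero(~mask.ravel())
    grow_idx = rng.choice(inactive, size=n_grow, replace=False)
    mask.ravel()[grow_idx] = True
    w.ravel()[grow_idx] = rng.normal(0, 0.01, size=n_grow)
    return w, mask

w = np.random.default_rng(4).normal(size=(64, 64))
mask = np.random.default_rng(5).random((64, 64)) < 0.05       # ~95% sparse
w *= mask
w, mask = drop_and_grow(w, mask, n_nonzero=int(0.04 * w.size))  # shrink budget
print(mask.mean())                       # fraction of non-zero weights after the step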